trust issue
Minimax-optimal trust-aware multi-armed bandits
Cai, Changxiao, Zhang, Jiacheng
Multi-armed bandit (MAB) algorithms have achieved significant success in sequential decision-making applications, under the premise that humans perfectly implement the recommended policy. However, existing methods often overlook the crucial factor of human trust in learning algorithms. When trust is lacking, humans may deviate from the recommended policy, leading to undesired learning performance. Motivated by this gap, we study the trust-aware MAB problem by integrating a dynamic trust model into the standard MAB framework. Specifically, the model assumes that the recommended and the actually implemented policies differ depending on human trust, which in turn evolves with the quality of the recommended policy. We establish the minimax regret in the presence of the trust issue and demonstrate the suboptimality of vanilla MAB algorithms such as the upper confidence bound (UCB) algorithm. To overcome this limitation, we introduce a novel two-stage trust-aware procedure that provably attains near-optimal statistical guarantees. A simulation study illustrates the benefits of our proposed algorithm when dealing with the trust issue.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- North America > United States > Michigan (0.04)
- Information Technology > Data Science > Data Mining > Big Data (1.00)
- Information Technology > Artificial Intelligence > Robots (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
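The core dynamic in the abstract above — a recommended arm that is only implemented when the human trusts the algorithm, with trust evolving based on outcomes — can be sketched in a toy simulation. Everything below (the deviate-to-a-random-arm behavior, the additive trust update, the initial trust level of 0.5) is an illustrative assumption, not the paper's actual trust model or algorithm:

```python
import math
import random

def trust_aware_ucb_sim(means, horizon, seed=0):
    # Toy simulation: a standard UCB1 learner makes recommendations, but
    # the human implements the recommended arm only with probability equal
    # to the current trust level, otherwise pulling a uniformly random arm.
    # Trust drifts up after a rewarding round and down otherwise.
    rng = random.Random(seed)
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    trust, total = 0.5, 0.0
    for t in range(1, horizon + 1):
        unexplored = [a for a in range(k) if counts[a] == 0]
        if unexplored:
            rec = unexplored[0]  # initialize: recommend each arm once
        else:
            # UCB1 index computed on the *implemented* history
            rec = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        arm = rec if rng.random() < trust else rng.randrange(k)
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
        # Illustrative additive trust update on the realized reward
        trust = min(1.0, trust + 0.01) if reward else max(0.0, trust - 0.01)
    return total / horizon

avg = trust_aware_ucb_sim([0.2, 0.5, 0.8], 5000)
print(round(avg, 3))
```

Note how the learner's estimates are built from the arms the human actually pulled, not the arms it recommended — this mismatch is what makes vanilla UCB suboptimal in the paper's setting.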
The US Congress Has Trust Issues. Generative AI Is Making It Worse
When it comes to artificial intelligence, United States senators are looking to the titans of Silicon Valley to fix a Senate problem--a problem today's political class perpetuates daily with their increasingly hyper-partisan ways, which generative AI now feeds on as it helps rewrite our collective future. Today, the Senate is hosting a first-of-its-kind, closed-door AI forum led by the likes of Elon Musk, Mark Zuckerberg, Bill Gates, and more than 17 others, including ethicists and academics. Even though they'll be on the senators' turf, for roughly six hours, they'll get microphones while the nation's elected leaders get muzzled. "All Senators are encouraged to attend to listen to this important discussion, but please note the format will not afford Senators the opportunity to provide remarks or to ask questions to the speakers," a notice from majority leader Chuck Schumer reads. As generative AI is poised to flood the internet with more--and more convincing--disinformation and misinformation, many AI experts say the top goal of the Senate should be restoring faith in, well, the Senate itself.
- North America > United States > California (0.26)
- North America > United States > Michigan (0.06)
- North America > United States > Maryland (0.06)
The C-Suite has Trust Issues with AI
This post was originally published in Harvard Business Review. Despite rising investments in artificial intelligence (AI) by today's enterprises, trust in the insights delivered by AI can be hit or miss with the C-suite. Are executives just resisting a new, unknown, and still unproven technology, or is their hesitancy rooted in something deeper? Executives have long resisted data analytics for higher-level decision-making, preferring gut-level decisions grounded in field experience to AI-assisted ones. AI has been adopted widely for tactical, lower-level decision-making in many industries -- credit scoring, upselling recommendations, chatbots, and managing machine performance are examples where it is being successfully deployed.
- North America > United States > California (0.05)
- Europe > Switzerland (0.05)
Artificial Intelligence's Biggest Stumbling Block: Trust
Management guru W. Edwards Deming famously said: "In God we trust. All others must bring data." But how far can we trust the data? This is becoming an important question, as the artificial intelligence systems now being built and deployed across the business landscape are only as good as the data being fed into them, along with the algorithms running the data. AI systems are now making decisions on customer value, courses of action, and operational viability, just to name a few vital functions.
Trust Issues: Uncertainty Estimation Does Not Enable Reliable OOD Detection On Medical Tabular Data
Ulmer, Dennis, Meijerink, Lotta, Cinà, Giovanni
When deploying machine learning models in high-stakes real-world environments such as health care, it is crucial to accurately assess the uncertainty concerning a model's prediction on abnormal inputs. However, there is a scarcity of literature analyzing this problem on medical data, especially on mixed-type tabular data such as Electronic Health Records. We close this gap by presenting a series of tests covering a large variety of contemporary uncertainty estimation techniques, in order to determine whether they are able to identify out-of-distribution (OOD) patients. In contrast to previous work, we design tests on realistic and clinically relevant OOD groups, and run experiments on real-world medical data. We find that almost all techniques fail to achieve convincing results, partly disagreeing with earlier findings.
- Oceania > Australia > New South Wales > Sydney (0.04)
- North America > United States > New York > Richmond County > New York City (0.04)
- North America > United States > New York > Queens County > New York City (0.04)
- (13 more...)
- Health & Medicine > Diagnostic Medicine (0.68)
- Health & Medicine > Health Care Technology > Medical Record (0.54)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.46)
- Health & Medicine > Therapeutic Area > Endocrinology (0.46)
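One family of uncertainty scores evaluated in work of this kind is the predictive entropy of a model ensemble: if ensemble members disagree on an input, the entropy of their averaged prediction rises, and a threshold on that entropy flags the input as OOD. The helper below is a minimal sketch of that scoring rule, not the paper's code, and the 0.5 threshold is an arbitrary illustration (in practice it would be chosen on validation data):

```python
import math

def predictive_entropy(prob_vectors):
    # Entropy (in nats) of the mean predictive distribution of an ensemble.
    n = len(prob_vectors)
    k = len(prob_vectors[0])
    mean = [sum(p[j] for p in prob_vectors) / n for j in range(k)]
    return -sum(p * math.log(p) for p in mean if p > 0)

def flag_ood(prob_vectors, threshold):
    # Flag an input as out-of-distribution when the ensemble's
    # predictive entropy exceeds the threshold.
    return predictive_entropy(prob_vectors) > threshold

# Confident, agreeing ensemble: low entropy, treated as in-distribution.
in_dist = [[0.9, 0.1], [0.85, 0.15], [0.95, 0.05]]
# Disagreeing ensemble: mean prediction near uniform, high entropy.
ood = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
print(flag_ood(in_dist, 0.5), flag_ood(ood, 0.5))  # False True
```

The paper's negative finding is precisely that scores like this one, which look clean on toy inputs, often fail to separate clinically relevant OOD groups on real tabular medical data.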
Tackling Trust in Machine Learning and Neural Networks: See It to Believe It
SAN FRANCISCO – Issues of explainability, interpretability, and regulatory compliance all share one thing in common: they contribute to a marked distrust of advanced machine learning and neural networks. Although it's not always easy to understand the various weights and measures that determine the outcomes of these predictive artificial intelligence models, the actions based on their results are usually perfectly clear. By focusing on those actions--such as what decisions models made about options for supply chain management, patient care, or product offers--organizations can not only validate the worth of these techniques, but also develop much needed trust in them. "It's more of a trust issue," admits Geoff Annesley, One Network EVP. "The people that are using [AI platforms], they may be a little cynical when they start out. But what we find is that they quickly start to trust the decisions because they see well, I'm running in parallel here when the decisions are made and it's beating me."
Nvidia has created the first video game demo using AI-generated graphics
The recent boom in artificial intelligence has produced impressive results in a somewhat surprising realm: the world of image and video generation. The latest example comes from chip designer Nvidia, which today published research showing how AI-generated visuals can be combined with a traditional video game engine. The result is a hybrid graphics system that could one day be used in video games, movies, and virtual reality. "It's a new way to render video content using deep learning," Nvidia's vice president of applied deep learning, Bryan Catanzaro, told The Verge. "Obviously Nvidia cares a lot about generating graphics [and] we're thinking about how AI is going to revolutionize the field."
- Leisure & Entertainment > Games > Computer Games (1.00)
- Information Technology > Hardware (1.00)
AI Still Has Trust Issues
A lot has been accomplished in the last year to improve comprehension, accuracy and scalability of artificial intelligence, but 2019 will see efforts focused on eliminating bias and making decision making more transparent. Jeff Welser, vice president at IBM Research, says the organization has hit several AI milestones in the past year and is predicting three key areas of focus for 2019. Bringing cognitive solutions powered by AI to a platform businesses can easily adopt is a strategic business imperative for the company, he said, while also increasing understanding of AI and addressing issues such as bias and trust. When it comes to advancing AI, Welser said there's been progress in several areas, including comprehension of speech and analyzing images. IBM's Project Debater work has been able to extend current AI speech comprehension capabilities beyond simple question answering tasks, enabling machines to better understand when people are making arguments, he said, and taking it beyond just "search on steroids."
Don't Trust Artificial Intelligence? Time To Open The AI 'Black Box'
Despite its promise, the growing field of Artificial Intelligence (AI) is experiencing a variety of growing pains. In addition to the problem of bias I discussed in a previous article, there is also the 'black box' problem: if people don't know how AI comes up with its decisions, they won't trust it. In fact, this lack of trust was at the heart of many failures of one of the best-known AI efforts: IBM Watson – in particular, Watson for Oncology. Experts were quick to single out the problem. "IBM's attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR disaster," says Vyacheslav Polonski, Ph.D., UX researcher for Google and founder of Avantgarde Analytics.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
- Health & Medicine > Therapeutic Area > Oncology (0.92)